Cortical Integration: Possible Solutions to the Binding and Linking Problems in Perception, Reasoning and Long Term Memory

Nick Bostrom


Source: http://www.nickbostrom.com/old/cortical.html

6. A sketch for a simulation model that uses annexation to deal with the integration problem

6.1 Synopsis
This section presents the outline of a simulation study of a neural network that would be able to understand simple language, discover regularities and law-like connections in its environment, form hypotheses about these and test them, and communicate the results of its investigations verbally. The level of abstraction is high; little attention is paid to biological detail. The aim is rather to illustrate some general principles in a suggestive way: to set forth a rudimentary framework, or skeleton, that could serve as the starting point for being fleshed out and scaled up to a biologically plausible system by having components added, refined or substituted. Two desiderata are given special attention: the system should handle the integration problem without assuming binding by synchronous firing, and it should manage one-shot learning, i.e. be able to learn a message, e.g. a sentence, from a single presentation. The results of simulations of some initial steps are given at the end of this section.

6.2 Overview of the system
A simple toy world exhibiting regularities is observed. An Attention Focuser for Observation (AFO) suggests things to look for in this world by feeding an observation sentence to the Sensory Module (SM), which determines whether the sentence is true or false by making the relevant observations. The result of this investigation is stored in Short Term Memory (STM) and might later be transferred to Long Term Memory 1 (LTM1). Based upon what is stored in LTM1, a Regularities Finder (RF) will suggest hypotheses (statements about regularities in the toy world), which will be stored in Long Term Memory 2 (LTM2). A second attention mechanism, the Attention Focuser for Reasoning (AFR), will be called into action to pick out items from LTM2 and from STM and feed them to a Logic Circuit (LC), which performs a simple check to see whether these items are consistent or not. If they are not, the law-like statements from LTM2 will suffer a credibility reduction. Non-visual information can also be provided to the system by adding sentences directly to the memory modules.
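A minimal, runnable sketch of this control flow is given below, with trivial Python stand-ins for each module. The stub behaviours (random attention, promotion of repeated facts to "laws", a lookup-based consistency test) are invented for the illustration; only the order of operations follows the description above.

```python
import random

stm, ltm1, ltm2 = [], [], []              # memories as plain lists

def afo_propose(ltm2):                    # Attention Focuser for Observation (stub)
    return random.choice(["left-wing triangle present", "right-wing square present"])

def sm_evaluate(sentence, scene):         # Sensory Module (stub): look the fact up in the scene
    return sentence in scene

def rf_suggest(ltm1):                     # Regularities Finder (stub): promote repeated facts to "laws"
    confirmed = [s for s, v in ltm1 if v]
    return [f"always: {s}" for s in set(confirmed) if confirmed.count(s) >= 2]

def lc_consistent(facts, laws):           # Logic Circuit (stub): a law "always: X" clashes with a false X
    falsified = {s for s, v in facts if not v}
    return not any(law.removeprefix("always: ") in falsified for law in laws)

for day in range(5):                      # each iteration is one observed scene ("day")
    scene = random.sample(["left-wing triangle present", "right-wing square present"],
                          k=random.randint(0, 2))
    sentence = afo_propose(ltm2)          # pick something to look for
    stm.append((sentence, sm_evaluate(sentence, scene)))
    ltm1.extend(stm); stm.clear()         # transfer observations to LTM1
    ltm2 = rf_suggest(ltm1)               # hypothesize regularities
    if not lc_consistent(ltm1, ltm2):     # contradicted laws are simply dropped in this stub
        ltm2 = []

print(ltm1, ltm2)
```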

All the memory modules are attractor nets, and sentences are stored superimposed on top of one another. Each sentence representation is a concatenation of smaller patterns representing the words that constitute the sentence, in the appropriate order.
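As a rough illustration of this storage scheme (not the original implementation), the sketch below encodes a sentence as a concatenation of random ±1 word codes, stores it in a Hopfield-style attractor net with a single Hebbian update, and recovers it from a corrupted cue. The word list and pattern sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each word is a random +/-1 code; a sentence is the concatenation of its word codes.
WORD_BITS = 16
vocab = {w: rng.choice([-1, 1], size=WORD_BITS) for w in
         ["there-is-no", "left-wing", "triangle", "is-left-of", "non", "middle", "circle"]}

def encode(words):
    return np.concatenate([vocab[w] for w in words])

sentence = encode(["there-is-no", "left-wing", "triangle", "is-left-of", "non", "middle", "circle"])
n = sentence.size

# One-shot Hebbian storage: clamp the pattern and add its outer product to the weights.
W = np.outer(sentence, sentence).astype(float)
np.fill_diagonal(W, 0.0)

# Retrieval: start from a corrupted cue and iterate the attractor dynamics.
cue = sentence.copy()
cue[rng.choice(n, size=n // 5, replace=False)] *= -1      # flip 20% of the bits
state = cue
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

print("recovered exactly:", np.array_equal(state, sentence))
```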

6.3 Toy World
The toy world was chosen to consist of 20*3 pixels, the home of up to four simultaneous objects: triangles, squares and circles. The objects were of uniform size and could not overlap. The idea was to get as simple a world as possible, yet one with enough complexity to illustrate the principles behind the system. Time was not included as a dimension of the world, although the network was to be presented with many scenes (which can be thought of as "days" in its history). The scenes could either be composed completely randomly, or they could be required to obey certain laws and regularities. One law would be that no object could be present on the left wing of the world unless there were another object of the same type present on the right wing. Statistical regularities could also be designed.
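A minimal scene generator along these lines might look as follows. For the sketch, each object is reduced to a (shape, column) pair on the 20-column strip; the division into left wing, middle and right wing is an assumption (the text does not specify region boundaries), and lawful scenes are obtained by rejection sampling against the example law.

```python
import random

SHAPES = ("triangle", "square", "circle")

# Assumed region boundaries over the 20-column strip (not specified in the text):
# columns 0-6 = left wing, 7-12 = middle, 13-19 = right wing.
def region(col):
    return "left-wing" if col < 7 else ("middle" if col < 13 else "right-wing")

def random_scene():
    """Up to four non-overlapping objects, each a (shape, column) pair."""
    cols = random.sample(range(20), k=random.randint(0, 4))
    return [(random.choice(SHAPES), c) for c in cols]

def obeys_law(scene):
    """Example law from the text: any left-wing object needs a same-type right-wing object."""
    for shape, col in scene:
        if region(col) == "left-wing":
            if not any(s == shape and region(c) == "right-wing" for s, c in scene):
                return False
    return True

def lawful_scene():
    while True:                      # rejection sampling against the law
        scene = random_scene()
        if obeys_law(scene):
            return scene

print(lawful_scene())
```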

6.4 Vocabulary and Syntax
To simplify matters, sentences are of uniform length and fixed word order. Each sentence contains one binary truth function and two simpler sentences of 7 words each. Such a subsentence has the following form:

(There is no) (left-wing) (triangle) (that is to the left of) (a non) (middle) (circle).

The first and fifth places can be occupied by a blank or a negation; the second and sixth positions can be filled with the words "left-wing", "right-wing", "middle" or "blank"; the third and seventh positions by "triangle", "square", "circle" or "blank"; and the fourth position by "is-to-the-left-of", "is-to-the-right-of", "and", or "not-both". The binary truth function could be either "and", "if-then", or their negations.
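To make the slot grammar concrete, a generator of random well-formed sentences might look like the sketch below. The blank filler is written as an empty string, and the names used for the negated truth functions ("not-and", "not-if-then") are invented for the example.

```python
import random

# Slot fillers taken from the description above; "" stands for the blank filler.
NEGATION = ["", "no"]                                        # slots 1 and 5
PLACE    = ["left-wing", "right-wing", "middle", ""]         # slots 2 and 6
SHAPE    = ["triangle", "square", "circle", ""]              # slots 3 and 7
RELATION = ["is-to-the-left-of", "is-to-the-right-of", "and", "not-both"]   # slot 4
TRUTH_FN = ["and", "if-then", "not-and", "not-if-then"]      # binary truth function (negated names assumed)

def subsentence():
    """One 7-word subsentence in fixed word order."""
    return [random.choice(NEGATION), random.choice(PLACE), random.choice(SHAPE),
            random.choice(RELATION), random.choice(NEGATION), random.choice(PLACE),
            random.choice(SHAPE)]

def sentence():
    """A full sentence: two 7-word subsentences joined by a binary truth function."""
    return {"connective": random.choice(TRUTH_FN),
            "left": subsentence(), "right": subsentence()}

print(sentence())
```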

6.5 Training
There are two phases of learning. In the first phase, the set-up phase, the modules are trained individually. This is done by some variant of backprop and will be a slow process. The second phase, the on-line phase, begins when the system has acquired the basic skills and concepts: it can then begin to function by storing facts in short term and long term memory and modifying its law-like sentences in the light of new observations and received communications. This type of learning will be more or less instantaneous (one-shot learning).

6.6 Set-up phase
Sensory Module: Three-layered feed-forward network. Its input is composed of a scene presented on the visual input sheet together with an encoding of a sentence, and it is trained to give as output a verdict (low or high activity of the output node) on whether the sentence is true or false of the observed toy world scene.
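The sketch below trains a small three-layered feed-forward net by plain backprop on a simplified version of this task: the scene is rendered as a 3x20 grid with one row per shape (an assumed visual encoding), and the sentence is reduced to a one-hot query of the form "there is a <wing> <shape>". Both simplifications, and all sizes and learning parameters, are assumptions made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
SHAPES = ("triangle", "square", "circle")

def rasterize(scene):
    """Assumed visual encoding: a 3x20 grid, one row per shape, flattened."""
    grid = np.zeros((3, 20))
    for shape, col in scene:
        grid[SHAPES.index(shape), col] = 1.0
    return grid.ravel()

def random_scene():
    cols = rng.choice(20, size=rng.integers(1, 5), replace=False)
    return [(SHAPES[rng.integers(3)], int(c)) for c in cols]

def encode_query(shape, wing):
    """One-hot query standing in for an observation sentence: 'there is a <wing> <shape>'."""
    q = np.zeros(5)
    q[SHAPES.index(shape)] = 1.0
    q[3 + (0 if wing == "left-wing" else 1)] = 1.0
    return q

def label(scene, shape, wing):
    lo, hi = (0, 7) if wing == "left-wing" else (13, 20)    # assumed wing boundaries
    return float(any(s == shape and lo <= c < hi for s, c in scene))

def make_example():
    scene, shape = random_scene(), SHAPES[rng.integers(3)]
    wing = ("left-wing", "right-wing")[rng.integers(2)]
    return np.concatenate([rasterize(scene), encode_query(shape, wing)]), label(scene, shape, wing)

# Three-layered feed-forward net (65 inputs -> 20 hidden -> 1 output), trained by backprop.
W1 = rng.normal(0, 0.3, (65, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.3, (20, 1));  b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    x, y = make_example()
    h = sigmoid(x @ W1 + b1)
    p = sigmoid(h @ W2 + b2)[0]
    d_out = p - y                                     # cross-entropy gradient at the output
    dW2 = np.outer(h, d_out); db2 = np.array([d_out])
    d_h = (W2[:, 0] * d_out) * h * (1 - h)
    dW1 = np.outer(x, d_h); db1 = d_h
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.1 * G

# Check the trained verdicts on fresh examples.
correct = sum(round(sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)[0]) == y
              for x, y in (make_example() for _ in range(500)))
print("accuracy:", correct / 500)
```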

Logic Circuit: Three-layered feed-forward network, whose task it is to perform some very simple deductions. It takes as input a couple of the sentences stored in the short term memory and another two from the long term regularities memory. Which sentences these are should later be determined by the Attention Focuser for Reasoning (see below); but during the set-up training phase the sentences can be chosen arbitrarily (from the whole set of grammatically well-formed sentences). The Logic Circuit is trained to tell whether these four sentences are consistent with each other. The sort of inconsistency that the LC should be able to discover is one like that of the following pair of sentences:

(1) If there is a left-wing triangle then there is a right-wing triangle.
(2) There is a left-wing triangle and there is no right-wing triangle.
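A procedural stand-in for this check (useful, for instance, for generating training targets for the LC) can brute-force all truth assignments to the elementary propositions involved. The propositional reduction used here is a simplification made for the sketch.

```python
from itertools import product

# Each sentence is reduced to a constraint over two elementary propositions:
# L = "there is a left-wing triangle", R = "there is a right-wing triangle".

def sentence1(L, R):        # (1) If there is a left-wing triangle then there is a right-wing triangle.
    return (not L) or R

def sentence2(L, R):        # (2) There is a left-wing triangle and there is no right-wing triangle.
    return L and not R

def consistent(*sentences):
    """Brute-force satisfiability over all truth assignments to (L, R)."""
    return any(all(s(L, R) for s in sentences) for L, R in product([False, True], repeat=2))

print(consistent(sentence1, sentence2))   # False: the pair is inconsistent
```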

Attention Focuser for Observation: 3-layered ff net; has the task of choosing which features in the observed scene to pay attention to, i.e. which sentences should be sent to the Sensory Module for evaluation. This choice should be made so as to maximize the likelihood that the information obtained from the SM will enable the Logic Circuit to falsify one of the sentences in the LTM2. Perfect performance is neither required nor expected. The Attention Focuser for Observation could be trained by giving the content of LTM2 as input, and possibly also some elementary information about the present scene. This information could be obtained by adding a module (a Primary Sensory Module) that recognizes the presence of, say, a geometric shape (square, triangle, circle). The AFO would then give as output some sentences that concern a geometric shape which is also the topic of some of the regularities in the LTM2.

Attention Focuser for Reasoning: 3-layered ff net; takes as input the contents of the STM and the LTM2, and gives as output two sentences from the STM and two from the LTM2, which are then fed to the Logic Circuit. The AFR is trained (with backprop), either after the LC, or else using a procedural deducer in place of the LC, so as to maximize the likelihood that the output sentences are inconsistent with the present content of the STM.

Regularities Finder: has the task of finding regularities amongst the sentences in the LTM1. In the simplest case, this 3-layered ff net is trained to generate sentences that have a high probability given the content of the LTM1 (according to some statistically reasonable probability assignment). A more sophisticated regularities finder would also take into account the LTM2 (i.e. the present "theory") when searching for interesting regularities.
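As a purely procedural stand-in for the Regularities Finder (not the trained net described above), one could propose "if A then B" hypotheses that hold across all stored observations containing A. The example facts and the support threshold below are invented for the illustration.

```python
from itertools import permutations

# Observations as they might accumulate in LTM1: per scene, which elementary
# facts were verified (contents are made up for the example).
observed_scenes = [
    {"left-wing triangle", "right-wing triangle"},
    {"right-wing triangle"},
    {"left-wing triangle", "right-wing triangle", "middle circle"},
    {"middle circle"},
]

def suggest_regularities(scenes, min_support=2):
    """Propose 'if A then B' whenever A occurs at least min_support times
    and B holds in every scene where A holds."""
    facts = set().union(*scenes)
    hypotheses = []
    for a, b in permutations(facts, 2):
        with_a = [s for s in scenes if a in s]
        if len(with_a) >= min_support and all(b in s for s in with_a):
            hypotheses.append(f"if {a} then {b}")
    return hypotheses

print(suggest_regularities(observed_scenes))
```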

6.7 On-line phase
As explained in the Overview, the system, once set up, operates by making observations and generating hypotheses which are checked against new observations. The sentence patterns were to be stored together with a tag (which could be thought of as an additional word) giving a value that indicates the confidence level assigned by the system to the sentence. If the LC finds a contradiction, then the sentences from LTM2 that were involved are adjusted by having their credibility value lowered; otherwise their credibility may increase slightly. If the credibility value sinks below a certain threshold, then the sentence might be removed from memory, or stored again with a "FALSITY" label attached. Since all memory modules are attractor nets, learning is achieved by clamping the pattern to be stored and making a Hebbian weight update. This is done on-line.
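The credibility bookkeeping might be sketched as follows; the actual storage would use the same one-shot Hebbian clamp illustrated earlier, and the numerical values used here (initial credibility, increments, removal threshold) are assumptions, not taken from the text.

```python
# Credibility tags for the law-like sentences in LTM2.
ltm2 = {"if left-wing triangle then right-wing triangle": 0.5}

REMOVAL_THRESHOLD = 0.1

def update_credibility(ltm2, involved_sentences, contradiction_found):
    for s in involved_sentences:
        if contradiction_found:
            ltm2[s] -= 0.2                           # contradicted laws lose credibility
        else:
            ltm2[s] = min(1.0, ltm2[s] + 0.02)       # surviving a check raises it slightly
        if ltm2[s] < REMOVAL_THRESHOLD:
            del ltm2[s]                              # or re-store with a "FALSITY" tag instead

update_credibility(ltm2, ["if left-wing triangle then right-wing triangle"],
                   contradiction_found=True)
print(ltm2)
```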

6.8 Results
The toy world generator, a sentence generator and a sentence evaluator, as well as the observation module and the ANN memory modules, were successfully programmed in the Cortex Pro environment. Instead of the attention focuser mechanisms and the logic circuit, there was initially a procedural simulacrum: a rule that removed sentences from the long term memory if they were either found to be false too often or failed to meet some simple criterion of relevance.

Disappointingly, the success rate of the observation module failed to rise appreciably above 85% within a reasonable simulation time on the available hardware. Various alterations in architecture and parameter values, as well as simplifications of the sentence structure, restriction of the vocabulary and compression of the encoding of the toy world, were tried, but did not increase the reliability of the observation module's verdicts to more than about 90%. As this was judged to be too low for a meaningful development of the remaining interdependent modules, the simulations were discontinued. This result incidentally illustrates the point made earlier: the rigidity of integration through convergence in a homogeneous feed-forward net makes it unsuitable for linguistic processing and parsing, where there is a strong demand for swift recombination and repeated application of simple combinatorial rules.

 
